
    Fine-grained timing using genetic programming

    In previous work, we demonstrated that Genetic Programming can be used to minimise the resource consumption of software, such as its power consumption or execution time. In this paper, we investigate the extent to which Genetic Programming can be used to gain fine-grained control over software timing. We introduce the ideas behind our work and carry out experiments, finding that Genetic Programming is indeed able to produce software with unusual and desirable timing properties, where it is not obvious how a manual approach could replicate such results. In general, we discover that Genetic Programming is most effective at controlling statistical properties of software timing, rather than exercising precise control over the timing for individual inputs. This control may find useful application in cryptography and embedded systems.
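    As a rough illustration of the idea (not the authors' system), the sketch below runs a simple evolutionary loop, closer to a plain genetic algorithm than to full Genetic Programming over program trees, that evolves per-input-class padding delays to minimise the variance of a toy function's measured execution time. The toy workload and all names are assumptions for illustration.

```python
import random
import statistics
import time

def toy_target(x, pad):
    """Toy routine with input-dependent work plus an evolved padding delay."""
    t0 = time.perf_counter()
    for _ in range(1000 * (1 + x % 3)):        # data-dependent work
        pass
    deadline = time.perf_counter() + pad[x % 3]
    while time.perf_counter() < deadline:       # evolved compensating busy-wait
        pass
    return time.perf_counter() - t0

def fitness(pad, trials=30):
    """Variance of observed timings across random inputs; lower is better."""
    times = [toy_target(random.randrange(100), pad) for _ in range(trials)]
    return statistics.pvariance(times)

def mutate(pad, scale=1e-4):
    child = list(pad)
    i = random.randrange(len(child))
    child[i] = max(0.0, child[i] + random.gauss(0.0, scale))
    return child

population = [[0.0, 0.0, 0.0] for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness)                # rank by timing variance
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

print("best evolved padding (seconds):", population[0])
```

    Full Genetic Programming would mutate the program's syntax tree rather than a parameter vector, but the fitness-driven search over a timing statistic is the same in spirit.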

    Mitigating Branch-Shadowing Attacks on Intel SGX using Control Flow Randomization

    Intel Software Guard Extensions (SGX) is a promising hardware-based technology for protecting sensitive computations from potentially compromised system software. However, recent research has shown that SGX is vulnerable to branch-shadowing -- a side-channel attack that leaks the fine-grained (branch-granularity) control flow of an enclave (SGX-protected code), potentially revealing sensitive data to the attacker. A previously proposed defense mechanism, called Zigzagger, attempted to hide the control flow but has been shown to be ineffective if the attacker can single-step through the enclave using the recent SGX-Step framework. Taking these stronger attacker capabilities into account, we propose a new defense against branch-shadowing based on control flow randomization. Our scheme is inspired by Zigzagger but provides quantifiable security guarantees with respect to a tunable security parameter. Specifically, we eliminate conditional branches and hide the targets of unconditional branches using a combination of compile-time modifications and run-time code randomization. We evaluated the performance of our approach by measuring the run-time overhead of ten benchmark programs from SGX-Nbench in an SGX environment.
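    One building block the abstract mentions, eliminating conditional branches, can be sketched generically as branchless selection: both outcomes execute the identical instruction sequence, so no secret-dependent branch direction or target is exposed. This is a conceptual Python illustration, not the paper's compile-time pass, and Python itself provides no real constant-time guarantees.

```python
def branchless_select(cond: bool, a: int, b: int) -> int:
    """Return a if cond else b, without a secret-dependent branch."""
    mask = -int(cond)               # all-ones (-1) if cond is True, else 0
    return (a & mask) | (b & ~mask)

# Both calls run the same instruction sequence, so a branch-shadowing
# observer sees no conditional branch whose direction depends on `cond`.
assert branchless_select(True, 7, 9) == 7
assert branchless_select(False, 7, 9) == 9
```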

    Barriers That Influence Adoption of ACL Injury Prevention Programs Among High School Girls’ Soccer Coaches


    Evaluating Modeling and Validation Strategies for Tooth Loss

    Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models for patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed for a mean ± SD of 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction period (maximum 10, 15, or 20 y or uncensored), and 4) validation scheme (internal or external by center/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients' age, tooth loss at baseline, and probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal (in-sample) validation overestimated the predictive power (area under the curve up to 0.90), while external (out-of-sample) validation found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating that temporal validation is a valid option. No model showed higher accuracy than the no-information rate; because tooth loss was rare, high overall accuracy is attainable without any predictive signal. In conclusion, none of the developed models would be useful in a clinical setting, despite their high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
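    The internal-versus-external validation gap the study reports can be reproduced in miniature with standard tooling. The sketch below uses synthetic cohorts and scikit-learn; all data, predictor names, and numbers are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Synthetic cohort: two toy predictors (think age, probing depth), rare event."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-(X @ np.array([1.0, 0.8]) - 2.5)))  # rare positives
    y = (rng.random(n) < p).astype(int)
    return X, y

X_train, y_train = make_cohort(5000)          # "training" cohort
X_ext, y_ext = make_cohort(5000, shift=0.3)   # shifted "external" cohort

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc_internal = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"in-sample AUC {auc_internal:.2f} vs external AUC {auc_external:.2f}")
# The flexible model scores near-perfectly on its own training data but
# noticeably lower on the external cohort, mirroring the abstract's finding.
```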

    Quantifying Timing Leaks and Cost Optimisation

    We develop a new notion of security against timing attacks, where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage, and use this estimate for cost optimisation.

    Comment: 16 pages, 2 figures, 4 tables. A shorter version is included in the proceedings of ICICS'08, the 10th International Conference on Information and Communications Security, 20-22 October 2008, Birmingham, UK.
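    The paper's estimate is its own construction; as a generic stand-in, timing leakage is often quantified as the mutual information between a secret and the observed running time. The sketch below estimates this empirically for a toy victim; the victim routine and the median-split binning are assumptions for illustration.

```python
import math
import random
import time
from collections import Counter

def victim(secret: int) -> float:
    """Toy routine whose running time depends on a secret bit."""
    t0 = time.perf_counter()
    for _ in range(1000 * (1 + secret)):   # secret-dependent work
        pass
    return time.perf_counter() - t0

n = 2000
secrets = [random.randrange(2) for _ in range(n)]
timings = [victim(s) for s in secrets]
median = sorted(timings)[n // 2]
obs = [t >= median for t in timings]       # 1-bit timing observation

# Empirical mutual information I(secret; observation), in bits.
joint = Counter(zip(secrets, obs))
p_s, p_o = Counter(secrets), Counter(obs)
leak = sum((c / n) * math.log2((c / n) / ((p_s[s] / n) * (p_o[o] / n)))
           for (s, o), c in joint.items())
print(f"estimated timing leakage: {leak:.3f} bits about the secret")
```

    Here the timing separates the secret almost perfectly, so the estimate approaches 1 bit; a well-balanced implementation would drive it toward 0.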

    A Novel Scaffold-Based Hybrid Multicellular Model for Pancreatic Ductal Adenocarcinoma-Toward a Better Mimicry of the in vivo Tumor Microenvironment

    With a very low survival rate, pancreatic ductal adenocarcinoma (PDAC) is a deadly disease. This has been primarily attributed to (i) its late diagnosis and (ii) its high resistance to current treatment methods. The latter specifically requires the development of robust, realistic in vitro models of PDAC, capable of accurately mimicking the in vivo tumor niche. Advancements in the field of tissue engineering (TE) have helped the development of such models for PDAC. Herein, we report for the first time a novel hybrid, polyurethane (PU) scaffold-based, long-term, multicellular (tri-culture) model of pancreatic cancer involving cancer cells, endothelial cells, and stellate cells. Recognizing the importance of extracellular matrix (ECM) proteins for the optimal growth of different cell types, the model consists of two different zones/compartments: an inner tumor compartment consisting of cancer cells [fibronectin (FN)-coated] and a surrounding stromal compartment consisting of stellate and endothelial cells [collagen I (COL)-coated]. Our novel hybrid tri-culture model supports the proliferation of all the different cell types for 35 days (5 weeks), the longest timeframe reported in vitro for such a model. Furthermore, the hybrid model showed extensive COL production by the cells, mimicking desmoplasia, one of PDAC's hallmark features. Fibril alignment of the stellate cells was observed, attesting to their activated state. All three cell types expressed various cell-specific markers within the scaffolds throughout the culture period and showed cellular migration between the two zones of the hybrid scaffold. Our novel model has great potential as a low-cost tool for in vitro studies of PDAC, as well as for treatment screening.

    On EPR paradox, Bell's inequalities and experiments which prove nothing

    This article shows that there is no paradox. Violation of Bell's inequalities should not be identified with a proof of non-locality in quantum mechanics. A number of past experiments are reviewed, and it is concluded that the experimental results should be re-evaluated. The results of the experiments with atomic cascades are shown not to contradict local realism. The article points out flaws in the experiments with down-converted photons. The neutron-interferometer experiments measuring "contextuality" and Bell-like inequalities are analyzed, and it is shown that the experimental results can be explained without such notions. An alternative experiment is proposed to prove the validity of local realism.

    Comment: 27 pages, 8 figures. I lightly edited the text and abstract and corrected equations (49) and (50).
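    For reference, the Bell-type bound at issue is usually tested in its CHSH form; this is the standard statement, not anything specific to this paper. With correlation functions E(a,b) for detector settings a, a', b, b', any local-realist model must satisfy the bound below, while quantum mechanics permits values up to 2*sqrt(2) (Tsirelson's bound):

```latex
% CHSH form of Bell's inequality: any local hidden-variable model obeys
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
% whereas quantum mechanics allows |S| up to 2\sqrt{2}. The experiments
% discussed above test whether the measured S exceeds the bound of 2.
```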